Hierarchical Re-ranker Retriever (HRR)
Singh, Ashish, Mohapatra, Priti
Retrieving the right level of context for a given query is a perennial challenge in information retrieval: too large a chunk dilutes semantic specificity, while too small a chunk lacks broader context. This paper introduces the Hierarchical Re-ranker Retriever (HRR), a framework designed to achieve both fine-grained and high-level context retrieval for large language model (LLM) applications. In HRR, documents are split into sentence-level and intermediate-level (512-token) chunks to maximize vector-search quality for both short and broad queries. We then employ a reranker that operates on these 512-token chunks, a granularity that is neither too coarse nor too fine for robust relevance scoring. Finally, top-ranked intermediate chunks are mapped to parent chunks (2048 tokens) to provide the LLM with sufficiently large context. We compare HRR against three widely used alternatives (detailed in the appendix): (1) Base Retriever + Reranker, (2) ChildToParent (C2P) Retriever + Reranker, and (3) SentenceToParent (S2P) Retriever + Reranker. Experiments on two datasets, Yojana and Lendryl, demonstrate that HRR consistently outperforms these baselines in both Hit Rate (HR) and Mean Reciprocal Rank (MRR). On Yojana, HRR achieves a perfect 100% Hit Rate and an MRR of 96.15%, which is 25% higher than the Base Retriever and around 15% higher than the C2P and S2P retrievers. Similarly, on Lendryl, HRR attains an MRR that is 20% higher than the Base Retriever and 10% higher than the C2P and S2P retrievers. These results confirm that a multi-stage retrieval strategy, combining fine-grained sentence-level and intermediate-level (512-token) filtering, optimized 512-token reranking, and final parent-chunk (2048-token) mapping, delivers more accurate, context-rich retrieval well suited for downstream LLM tasks.
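The pipeline described in the abstract can be sketched as follows. The chunk sizes (512-token intermediate, 2048-token parent) come from the abstract; everything else, including the function names, the data layout, and the toy scoring function standing in for a cross-encoder reranker, is an illustrative assumption, not the authors' implementation.

```python
# Hedged sketch of the HRR flow: build a 2048-token parent / 512-token
# intermediate hierarchy, rerank the intermediate chunks, then map the
# winners to their parent chunks. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class MidChunk:
    tokens: list        # the 512-token intermediate chunk
    parent_id: int      # index of its enclosing 2048-token parent chunk

def build_hierarchy(tokens, mid_size=512, parent_size=2048):
    """Split a document into parent chunks, then split each parent into
    intermediate chunks that remember which parent they came from."""
    parents = [tokens[i:i + parent_size]
               for i in range(0, len(tokens), parent_size)]
    mids = []
    for pid, parent in enumerate(parents):
        for j in range(0, len(parent), mid_size):
            mids.append(MidChunk(tokens=parent[j:j + mid_size], parent_id=pid))
    return parents, mids

def hrr_retrieve(score, parents, mids, top_k=2):
    """Rerank the 512-token chunks with `score` (a stand-in for a real
    reranker), then map the top chunks to their parent chunks,
    deduplicating parents while preserving rank order."""
    seen, out = set(), []
    for chunk in sorted(mids, key=score, reverse=True):
        if chunk.parent_id not in seen:
            seen.add(chunk.parent_id)
            out.append(parents[chunk.parent_id])
        if len(out) == top_k:
            break
    return out

doc = list(range(8192))                # a toy "document" of 8192 token ids
parents, mids = build_hierarchy(doc)
print(len(parents), len(mids))         # 4 parent chunks, 16 intermediate chunks
hits = hrr_retrieve(lambda c: c.tokens[0], parents, mids)
print(hits[0][0], hits[1][0])          # first token of each returned parent
```

In a real system the `score` callable would be a cross-encoder reranker applied to (query, chunk) pairs; the parent-deduplication step is what keeps the final 2048-token contexts from repeating.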
Passage Segmentation of Documents for Extractive Question Answering
Liu, Zuhong, Simon, Charles-Elie, Caspani, Fabien
Retrieval-Augmented Generation (RAG) has proven effective in open-domain question answering. However, the chunking process, which is essential to this pipeline, often receives insufficient attention relative to retrieval and synthesis components. This study emphasizes the critical role of chunking in improving the performance of both dense passage retrieval and the end-to-end RAG pipeline. We then introduce the Logits-Guided Multi-Granular Chunker (LGMGC), a novel framework that splits long documents into contextualized, self-contained chunks of varied granularity. Our experimental results, evaluated on two benchmark datasets, demonstrate that LGMGC not only improves the retrieval step but also outperforms existing chunking methods when integrated into a RAG pipeline.
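The abstract does not specify the LGMGC algorithm, but the idea of logits-guided, multi-granular chunking can be sketched under stated assumptions: a per-boundary score (standing in for a language model's segment-boundary logit) decides where to cut, and the resulting segments are then grouped into chunks at several size budgets. Every name and the scoring function below are hypothetical.

```python
# Hedged sketch of multi-granular chunking in the spirit of LGMGC.
# `boundary_score` is an assumed stand-in for an LM-derived logit.

def segment(sentences, boundary_score, threshold=0.5):
    """Cut the sentence stream wherever the boundary score exceeds the
    threshold, yielding self-contained segments."""
    segments, current = [], []
    for i, sent in enumerate(sentences):
        current.append(sent)
        if boundary_score(i) > threshold:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

def multi_granular(segments, budgets=(1, 2, 4)):
    """Emit chunks at several granularities: each budget groups that many
    consecutive segments into one chunk."""
    chunks = []
    for b in budgets:
        for i in range(0, len(segments), b):
            group = segments[i:i + b]
            chunks.append(" ".join(s for seg in group for s in seg))
    return chunks

sents = ["S0.", "S1.", "S2.", "S3."]
# Toy scorer: cut after every odd-indexed sentence.
segs = segment(sents, lambda i: 1.0 if i % 2 == 1 else 0.0)
print(len(segs))                  # 2 segments
print(len(multi_granular(segs)))  # 2 + 1 + 1 = 4 chunks across the budgets
```

Indexing all granularities at once lets short queries match small chunks and broad queries match larger ones, which is the retrieval benefit the abstract attributes to varied-granularity chunking.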